Supplementary Material
DeepI2I: Enabling Deep Hierarchical Image-to-Image Translation by Transferring from GANs
We transfer from a pretrained BigGAN to learn more detailed network information. The learning rate of the generator is 0.0001, and that of the encoder, adaptor and discriminator is 0.0004, both with exponential decay. We also evaluate our method using fewer animal faces.

[Figure: interpolation results. The input image is kept fixed while interpolating between two class embeddings. The first column shows the input images; the remaining columns show the interpolated results.]

[Figure: further results on the Animal Faces dataset.]
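The interpolation experiment above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name and the toy 128-dimensional embeddings are hypothetical, and in the actual model each interpolated embedding would condition the generator while the encoder features of the fixed input image are reused.

```python
import numpy as np


def interpolate_embeddings(e_a, e_b, num_steps=8):
    """Linearly interpolate between two class embeddings.

    Hypothetical sketch: the input image (and hence its encoder
    features) stays fixed while the conditioning class embedding
    is swept from e_a to e_b, yielding one generator input per step.
    """
    alphas = np.linspace(0.0, 1.0, num_steps)
    return [(1.0 - a) * e_a + a * e_b for a in alphas]


# Toy embeddings standing in for two animal-face classes.
rng = np.random.default_rng(0)
e_cat = rng.standard_normal(128)
e_dog = rng.standard_normal(128)

embeddings = interpolate_embeddings(e_cat, e_dog, num_steps=8)
# The first and last steps reproduce the endpoint embeddings exactly.
assert np.allclose(embeddings[0], e_cat)
assert np.allclose(embeddings[-1], e_dog)
```

Feeding each interpolated embedding to the generator, together with the unchanged hierarchical features of the input image, produces the smooth class transitions shown in the interpolation figure.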
Image-to-image translation has recently achieved remarkable results. Despite this success, however, it suffers from inferior performance when translations between classes require large shape changes. We attribute this to the high-resolution bottlenecks used by current state-of-the-art image-to-image methods. Therefore, in this work, we propose a novel deep hierarchical image-to-image translation method, called DeepI2I. We learn the model by leveraging hierarchical features: (a) structural information contained in the bottom layers and (b) semantic information extracted from the top layers.